Learning Social Welfare Functions
Is it possible to understand or imitate a policy maker's rationale by looking at past decisions they made? We focus on two learning tasks; in the first, the input is vectors of utilities of an action (decision or policy) for individuals in a group and their associated social welfare as judged by a policy maker, whereas in the second, the input is pairwise comparisons between the welfares associated with a given pair of utility vectors. We show that power mean functions are learnable with polynomial sample complexity in both cases, even if the social welfare information is noisy. Finally, we design practical algorithms for these tasks and evaluate their performance.
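The power mean family underlying the first learning task can be made concrete with a short sketch. The following is a minimal illustration under stated assumptions, not the authors' algorithm: given (utility vector, welfare) pairs, it recovers the exponent p of a power mean social welfare function M_p(u) = ((1/n) Σ u_i^p)^(1/p) by grid search over squared error. The function names, the grid, and the toy data are all illustrative.

```python
# Illustrative sketch: recovering the exponent p of a power mean
# social welfare function from (utility vector, welfare) pairs.
# M_p(u) = ((1/n) * sum(u_i ** p)) ** (1/p)
import math

def power_mean(utilities, p):
    """Power mean of a vector of positive utilities."""
    n = len(utilities)
    if abs(p) < 1e-9:  # the p -> 0 limit is the geometric mean
        return math.exp(sum(math.log(u) for u in utilities) / n)
    return (sum(u ** p for u in utilities) / n) ** (1.0 / p)

def fit_power_mean(data, grid=None):
    """Grid-search the exponent p that best explains observed welfares.

    data: list of (utilities, observed_welfare) pairs.
    Returns the p in `grid` minimizing total squared error.
    """
    if grid is None:
        grid = [i / 10 for i in range(-30, 31)]  # p in [-3, 3]
    def loss(p):
        return sum((power_mean(u, p) - w) ** 2 for u, w in data)
    return min(grid, key=loss)

# Toy example: welfares generated by the harmonic mean (p = -1)
data = [([1.0, 2.0], power_mean([1.0, 2.0], -1.0)),
        ([0.5, 4.0], power_mean([0.5, 4.0], -1.0)),
        ([2.0, 3.0, 6.0], power_mean([2.0, 3.0, 6.0], -1.0))]
print(fit_power_mean(data))  # recovers p = -1.0
```

The grid search stands in for whatever estimator the paper actually analyzes; the point is only that a single exponent parameterizes the whole family, which is what makes polynomial sample complexity plausible.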
Envisioning Stakeholder-Action Pairs to Mitigate Negative Impacts of AI: A Participatory Approach to Inform Policy Making
Barnett, Julia, Kieslich, Kimon, Helberger, Natali, Diakopoulos, Nicholas
The potential for negative impacts of AI has rapidly become more pervasive around the world, and this has intensified the need for responsible AI governance. While many regulatory bodies endorse risk-based approaches and a multitude of risk mitigation practices are proposed by companies and academic scholars, these approaches are commonly expert-centered and thus exclude a significant group of stakeholders. Ensuring that AI policies align with democratic expectations requires methods that prioritize the voices and needs of those impacted. In this work we develop a participatory, forward-looking approach to inform policy-makers and academics that places the needs of lay stakeholders at the forefront and enriches the development of risk mitigation strategies. Our approach (1) maps potential mitigation and prevention strategies for negative AI impacts that assign responsibility to various stakeholders, (2) explores their importance and prioritization in the eyes of laypeople, and (3) presents these insights in policy fact sheets, i.e., a digestible format for informing policy processes. We emphasize that this approach is not intended to replace policy-makers; rather, our aim is to present an informative method that enriches mitigation strategies and enables a more participatory approach to policy development.
- North America > United States > California > Los Angeles County > Los Angeles (0.14)
- Europe > Netherlands > North Holland > Amsterdam (0.04)
- South America > Chile > Santiago Metropolitan Region > Santiago Province > Santiago (0.04)
- (6 more...)
- Questionnaire & Opinion Survey (1.00)
- Research Report > New Finding (0.92)
- Overview (0.88)
- Social Sector (1.00)
- Media > News (1.00)
- Information Technology > Security & Privacy (1.00)
- (3 more...)
Assessing the State of AI Policy
DeFranco, Joanna F., Biersmith, Luke
The deployment of artificial intelligence (AI) applications has accelerated rapidly. AI-enabled technologies face the public in many ways, including infrastructure, consumer products, and home applications. Because many of these technologies present risks, either in the form of physical injury or of bias that may yield unfair outcomes, policy makers must consider the need for oversight. Most policy makers, however, lack the technical knowledge to judge whether an emerging AI technology is safe, effective, and in need of oversight; they must therefore depend on expert opinion. But policy makers are better served when, in addition to expert opinion, they have some general understanding of existing guidelines and regulations. This work provides an overview of the landscape of AI legislation and directives at the international, U.S. state, city, and federal levels. It also reviews relevant business standards and technical society initiatives. An overlap and gap analysis is then performed, resulting in a reference guide that includes recommendations and guidance for future policy making.
- Asia > Russia (0.14)
- North America > United States > New York (0.04)
- North America > United States > Maryland (0.04)
- (15 more...)
- Law > Statutes (1.00)
- Information Technology > Security & Privacy (1.00)
- Government > Regional Government > North America Government > United States Government (1.00)
- Government > Military (0.93)
iASiS: Towards Heterogeneous Big Data Analysis for Personalized Medicine
Krithara, Anastasia, Aisopos, Fotis, Rentoumi, Vassiliki, Nentidis, Anastasios, Bougatiotis, Konstantinos, Vidal, Maria-Esther, Menasalvas, Ernestina, Rodriguez-Gonzalez, Alejandro, Samaras, Eleftherios G., Garrard, Peter, Torrente, Maria, Pulla, Mariano Provencio, Dimakopoulos, Nikos, Mauricio, Rui, De Argila, Jordi Rambla, Tartaglia, Gian Gaetano, Paliouras, George
The vision of the iASiS project is to turn the wave of big biomedical data heading our way into actionable knowledge for decision makers. This is achieved by integrating data from disparate sources, including genomics, electronic health records, and bibliography, and applying advanced analytics methods to discover useful patterns. The goal is to turn large amounts of available data into actionable information for authorities planning public health activities and policies. The integration and analysis of these heterogeneous sources of information will enable the best decisions to be made, allowing diagnosis and treatment to be personalised to each individual. The project offers a common representation schema for the heterogeneous data sources. The iASiS infrastructure is able to convert clinical notes into usable data; combine them with genomic data, related bibliography, image data, and more; and create a global knowledge base. This facilitates the use of intelligent methods to discover useful patterns across different resources. Semantic integration of the data makes it possible to generate information that is rich, auditable, and reliable. This information can be used to provide better care, reduce errors, and create more confidence in sharing data, thus providing more insights and opportunities. Data resources for two disease categories, dementia and lung cancer, are explored within the iASiS use cases.
- Europe > Spain > Community of Madrid > Madrid (0.05)
- Europe > Greece > Attica > Athens (0.05)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- (2 more...)
- Information Technology > Artificial Intelligence > Representation & Reasoning (1.00)
- Information Technology > Artificial Intelligence > Natural Language (1.00)
- Information Technology > Artificial Intelligence > Machine Learning (1.00)
- Information Technology > Data Science > Data Mining > Big Data (0.86)
Why creating an international body for AI is a bad idea
Jessica Melugin, Competitive Enterprise Institute Director of the Center for Technology and Innovation, discusses Twitter accusing Meta of stealing trade secrets and a New York City law requiring businesses to audit A.I. hiring tools. Former Google CEO Eric Schmidt recently re-upped his calls for a global body, akin to the Intergovernmental Panel on Climate Change (IPCC), to advise member nations on regulating artificial intelligence (AI). Schmidt first made his case for an "International Panel on AI Safety" – an "IPCC for AI," if you will – in an October 2023 op-ed in the Financial Times. He writes of the AI panel's potential to be "an independent, expert-led body empowered to objectively inform governments about the current state of AI capabilities and make evidence-based predictions." He claims that AI policy makers "are looking for impartial, technically reliable and timely assessments about its speed of progress and impact."
- North America > United States > New York (0.25)
- Asia > China (0.10)
- North America > United States > District of Columbia > Washington (0.05)
- Europe > United Kingdom (0.05)
- Government (1.00)
- Law > Intellectual Property & Technology Law (0.56)
Brief for the Canada House of Commons Study on the Implications of Artificial Intelligence Technologies for the Canadian Labor Force: Generative Artificial Intelligence Shatters Models of AI and Labor
Exciting advances in generative artificial intelligence (AI) have sparked concern for jobs, education, productivity [1], and the future of work. As with past technologies, generative AI may not lead to mass unemployment. But, unlike past technologies, generative AI is creative, cognitive, and potentially ubiquitous, which makes the usual assumptions of automation predictions ill-suited for today. Existing projections suggest that generative AI will impact workers in occupations that were previously considered immune to automation. As AI's full set of capabilities and applications emerges, policy makers should promote workers' career adaptability. This goal requires improved data on job separations and unemployment by locality and job title in order to identify early indicators for the workers facing labor disruption. Further, prudent policy should incentivize education programs to accommodate learning with AI as a tool while preparing students for the demands of the future of work.
- North America > Canada (0.41)
- North America > United States > Pennsylvania > Allegheny County > Pittsburgh (0.04)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
- (3 more...)
- Education (1.00)
- Banking & Finance > Economy (1.00)
- Government > Regional Government > North America Government > United States Government (0.95)
- Information Technology > Artificial Intelligence > Natural Language > Generation (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (1.00)
- Information Technology > Artificial Intelligence > Issues > Social & Ethical Issues (1.00)
We Don't Actually Know If AI Is Taking Over Everything
Since the release of ChatGPT last year, I've heard some version of the same thing over and over again: What is going on? The rush of chatbots and endless "AI-powered" apps has made starkly clear that this technology is poised to upend everything--or, at least, something. Yet even the AI experts are struggling with a dizzying feeling that for all the talk of its transformative potential, so much about this technology is veiled in secrecy. More and more of this technology, once developed through open research, has become almost completely hidden within corporations that are opaque about what their AI models are capable of and how they are made. Transparency isn't legally required, and the secrecy is causing problems: Earlier this year, The Atlantic revealed that Meta and others had used nearly 200,000 books to train their AI models without the compensation or consent of the authors.
What will be the impact of AI-assisted robotics on humanity? – CIFAR
As the world contends with the lingering catastrophic effects of a global pandemic, set against the ongoing backdrop of climate disasters and worsening conflict and humanitarian disasters, the need for rapid scientific advancement in response to crisis has never been so apparent. We can't say we weren't warned. In 2015, the United Nations released The 2030 Agenda for Sustainable Development, a shared blueprint constituting "an urgent call for action by all countries -- developed and developing -- in a global partnership." Central to the agenda are the 17 Sustainable Development Goals (SDGs) that address a range of urgent needs for humanity, from poverty alleviation and gender equality, to decent work, sustainable cities and communities, a clean environment, affordable and clean energy, and peace. The goals were founded on decades of input from global researchers, stakeholders and policy makers.
- Government (0.57)
- Law (0.52)
CitySpec with Shield: A Secure Intelligent Assistant for Requirement Formalization
Chen, Zirong, Li, Issa, Zhang, Haoxiang, Preum, Sarah, Stankovic, John A., Ma, Meiyi
An increasing number of monitoring systems have been developed in smart cities to ensure that the real-time operations of a city satisfy safety and performance requirements. However, many existing city requirements are written in English with missing, inaccurate, or ambiguous information. There is a high demand for assisting city policymakers in converting human-specified requirements into machine-understandable formal specifications for monitoring systems. To meet this demand, we build CitySpec, the first intelligent assistant system for requirement specification in smart cities. To create CitySpec, we first collect over 1,500 real-world city requirements across different domains (e.g., transportation and energy) from over 100 cities and extract city-specific knowledge to generate a dataset of city vocabulary with 3,061 words. We also build a translation model, enhance it through requirement synthesis, and develop a novel online learning framework with shielded validation. The evaluation results on real-world city requirements show that CitySpec increases the sentence-level accuracy of requirement specification from 59.02% to 86.64% and has strong adaptability to new cities and new domains (e.g., the F1 score for requirements in Seattle increases from 77.6% to 93.75% with online learning). With the enhancement from the shield function, CitySpec is immune to most known textual adversarial inputs (e.g., the attack success rate of DeepWordBug is reduced from 82.73% to 0% after the shield function). We test CitySpec with 18 participants from different domains. CitySpec shows strong usability and adaptability to different domains, as well as robustness to malicious inputs.
- North America > United States > Virginia > Albemarle County > Charlottesville (0.14)
- North America > United States > California > San Francisco County > San Francisco (0.04)
- Asia > China > Beijing > Beijing (0.04)
- (13 more...)
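The idea of shielding a requirement-translation pipeline against character-level attacks can be illustrated with a toy sketch. This is a hypothetical simplification, not CitySpec's actual shield function: it flags a requirement as suspicious when too many of its tokens fall outside a known city vocabulary, the kind of out-of-vocabulary noise that DeepWordBug-style perturbations introduce. The vocabulary, threshold, and function names are all assumptions for illustration.

```python
# Hypothetical sketch of a vocabulary-based "shield": accept an input
# requirement only if enough of its tokens belong to a known city
# vocabulary, a simple defense against character-level adversarial
# perturbations (e.g., DeepWordBug-style typos). Vocabulary and
# threshold are illustrative, not from the paper.
CITY_VOCAB = {"the", "bus", "speed", "limit", "should", "not", "exceed",
              "mph", "on", "residential", "streets", "25"}

def shield(requirement: str, min_known_ratio: float = 0.8) -> bool:
    """Return True if the requirement looks clean enough to translate."""
    tokens = requirement.lower().split()
    if not tokens:
        return False
    known = sum(1 for t in tokens if t in CITY_VOCAB)
    return known / len(tokens) >= min_known_ratio

print(shield("the bus speed should not exceed 25 mph"))   # True
print(shield("the b0s spe3d shuold not excced 25 mqh"))   # False
```

A real shield would sit in front of the translation model, so perturbed inputs are rejected (or routed to a human) instead of being silently mistranslated into a formal specification.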